criminal law


WangchanThaiInstruct: An Instruction-Following Dataset for Culture-Aware, Multitask, and Multi-domain Evaluation in Thai

Limkonchotiwat, Peerat, Tuchinda, Pume, Lowphansirikul, Lalita, Nonesung, Surapon, Tasawong, Panuthep, Aji, Alham Fikri, Udomcharoenchaikit, Can, Nutanong, Sarana

arXiv.org Artificial Intelligence

Large language models excel at instruction-following in English, but their performance in low-resource languages like Thai remains underexplored. Existing benchmarks often rely on translations, missing cultural and domain-specific nuances needed for real-world use. We present WangchanThaiInstruct, a human-authored Thai dataset for evaluation and instruction tuning, covering four professional domains and seven task types. Created through a multi-stage quality control process with annotators, domain experts, and AI researchers, WangchanThaiInstruct supports two studies: (1) a zero-shot evaluation showing performance gaps on culturally and professionally specific tasks, and (2) an instruction tuning study with ablations isolating the effect of native supervision. Models fine-tuned on WangchanThaiInstruct outperform those using translated data in both in-domain and out-of-domain benchmarks. These findings underscore the need for culturally and professionally grounded instruction data to improve LLM alignment in low-resource, linguistically diverse settings.


LegalChainReasoner: A Legal Chain-guided Framework for Criminal Judicial Opinion Generation

Shi, Weizhe, Wang, Qiqi, Pan, Yihong, Liu, Qian, Zhao, Kaiqi

arXiv.org Artificial Intelligence

A criminal judicial opinion represents the judge's disposition of a case, including the decision rationale and sentencing. Automatically generating such opinions can assist in analyzing sentencing consistency and provide judges with references to similar past cases. However, current research typically approaches this task by dividing it into two isolated subtasks: legal reasoning and sentencing prediction. This separation often leads to inconsistency between the reasoning and predictions, failing to meet real-world judicial requirements. Furthermore, prior studies rely on manually curated knowledge to enhance applicability, yet such methods remain limited in practical deployment. To address these limitations and better align with legal practice, we propose a new LegalAI task: Judicial Opinion Generation, which simultaneously produces both legal reasoning and sentencing decisions. To achieve this, we introduce LegalChainReasoner, a framework that applies structured legal chains to guide the model through comprehensive case assessments. By integrating factual premises, composite legal conditions, and sentencing conclusions, our approach ensures flexible knowledge injection and end-to-end opinion generation. Experiments on two real-world and open-source Chinese legal case datasets demonstrate that our method outperforms baseline models.


Agents on the Bench: Large Language Model Based Multi Agent Framework for Trustworthy Digital Justice

Jiang, Cong, Yang, Xiaolei

arXiv.org Artificial Intelligence

The justice system has increasingly employed AI techniques to enhance efficiency, yet limitations remain in improving the quality of decision-making, particularly regarding the transparency and explainability needed to uphold public trust in legal AI. To address these challenges, we propose a large language model-based multi-agent framework named AgentsBench, which aims to simultaneously improve both efficiency and quality in judicial decision-making. Our approach leverages multiple LLM-driven agents that simulate the collaborative deliberation and decision-making process of a judicial bench. We conducted experiments on the legal judgment prediction task, and the results show that our framework outperforms existing LLM-based methods in terms of performance and decision quality. By incorporating these elements, our framework reflects real-world judicial processes more closely, enhancing accuracy, fairness, and societal consideration. AgentsBench provides a more nuanced and realistic method of trustworthy AI decision-making, with strong potential for application across various case types and legal scenarios.


The Reasonable Person Standard for AI

Rane, Sunayana

arXiv.org Artificial Intelligence

As AI systems are increasingly incorporated into domains where human behavior has set the norm, a challenge for AI governance and AI alignment research is to regulate their behavior in a way that is useful and constructive for society. One way to approach this challenge is to ask: how do we govern the human behavior that the models are emulating? To evaluate human behavior, the American legal system often uses the "Reasonable Person Standard." The idea of "reasonable" behavior comes up in nearly every area of law. The legal system often judges the actions of parties with respect to what a reasonable person would have done under similar circumstances. This paper argues that the reasonable person standard provides useful guidelines for the type of behavior we should develop, probe, and stress-test in models. It explains how reasonableness is defined and used in key areas of the law using illustrative cases, how the reasonable person standard could apply to AI behavior in each of these areas and contexts, and how our societal understanding of "reasonable" behavior provides useful technical goals for AI researchers.


Major UK retailers urged to quit 'authoritarian' police facial recognition strategy

The Guardian > Business

Some of Britain's biggest retailers, including Tesco, John Lewis and Sainsbury's, have been urged to pull out of a new policing strategy amid warnings it risks wrongly criminalising people of colour, women and LGBTQ people. A coalition of 14 human rights groups has written to the main retailers – also including Marks & Spencer, the Co-op, Next, Boots and Primark – saying that their participation in a new government-backed scheme that relies heavily on facial recognition technology to combat shoplifting will "amplify existing inequalities in the criminal justice system". The letter, from Liberty, Amnesty International and Big Brother Watch, among others, questions the unchecked rollout of a technology that has provoked fierce criticism over its impact on privacy and human rights at a time when the European Union is seeking to ban the technology in public spaces through proposed legislation. "Facial recognition technology notoriously misidentifies people of colour, women and LGBTQ people, meaning that already marginalised groups are more likely to be subject to an invasive stop by police, or at increased risk of physical surveillance, monitoring and harassment by workers in your stores," the letter states. Its authors also express dismay that the move will "reverse steps" that big retailers introduced during the Black Lives Matter movement, including high-profile commitments to be champions of diversity, equality and inclusion. Meanwhile, concerns over the broadening use of facial recognition technology have further intensified after the emergence of details of a police watchlist used to justify the contentious decision to use biometric surveillance at July's Formula One British Grand Prix at Silverstone.


Eight Months Pregnant and Arrested After False Facial Recognition Match

NYT > Business Day

After being charged in court with robbery and carjacking, Ms. Woodruff was released that evening on a $100,000 personal bond. In an interview, she said she went straight to the hospital where she was diagnosed with dehydration and given two bags of intravenous fluids. A month later, the Wayne County prosecutor dismissed the case against her. The ordeal started with an automated facial recognition search, according to an investigator's report from the Detroit Police Department. Ms. Woodruff is the sixth person to report being falsely accused of a crime as a result of facial recognition technology used by police to match an unknown offender's face to a photo in a database.


Artificial Intelligence is about to defend a human in court 'for the first time ever'

#artificialintelligence

Artificial Intelligence is breaking a new frontier, with a company teasing that its robot will play an important part in a trial in court. The AI robot will be the first to advise a defendant in a court of law. The news was shared by the publication New Scientist, which explained that the AI would run on the defendant's phone. The robot would listen in on court proceedings and would then advise the defendant through an earpiece. The AI was developed by a company called DoNotPay, which describes itself as "The World's First Robot Lawyer." The company was founded by Joshua Browder, and its product is described as a chatbot.


Japan online watchdog gets power to request removal of gun-making info

The Japan Times

Japan's internet watchdog can request that instructional posts related to murder, guns and explosives be removed starting in March, police said Thursday, as authorities respond to former Prime Minister Shinzo Abe's suspected killer using information found online to build weapons. The National Police Agency said that by adding to the types of content that can be requested for removal by internet service providers, it aims to prevent crimes before they occur. To strengthen its online surveillance, the agency said it will also consider using artificial intelligence to analyze social media posts.


Drone video captures part of encounter in which Riverside County deputy was fatally shot

Los Angeles Times

Drone video of a deadly standoff that took the life of a Riverside County sheriff's deputy on Friday appears to capture the suspect being shot by another deputy on the street before the wounded officer is rushed into the back of a sheriff's SUV. The video was posted to YouTube late Friday by a brand new account created the same day. Its owner could not immediately be reached for comment by The Times on Saturday. The video provided the clearest picture yet of the encounter, which took the life of Deputy Darnell Calhoun, 30. The sheriff's department said Calhoun was fatally shot after responding to an "unknown trouble" call about 4:20 p.m. in the 18500 block of Hilldale Lane, a residential area in Lakeland Village.